
    Neurophysiological models of gaze control in Humanoid Robotics

    This work presents a robotic implementation of a neurophysiological model of rapid orienting gaze shifts in humans, with the final goal of validating and tuning the model parameters. The quantitative assessment of robot performance confirmed a good ability to foveate the target, with low residual errors around the desired target position. The ability to maintain the desired position was also good: gaze fixation after the saccadic movement was executed with only a few oscillations of the head and eye, which occur because the model requires very high dynamics.

    9.1. Robotic point of view
    The head and eye residual oscillations increase linearly with movement amplitude. Fig. 16 shows that the residual gaze oscillation is smaller than that of the head: the eye oscillations compensate for the head oscillations, so the gaze is more stable. We explain these findings by observing that the accelerations required to execute (or stop and invert) the movement are very high, especially for the eye. Even though the robotic head was designed to match human performance (in terms of angles and velocities), in its present configuration it is still not capable of producing such accelerations. This is particularly evident for the eye movement, because the motor has to invert its rotation when the fixation point is first reached. With respect to the timing of the movement, the experimental results are in close accordance with the data available on humans (Goossens and Van Opstal, 1997). The same conclusion can be drawn for the shapes of the coordinated movements, which can be directly compared with the typical examples reported in Fig. 14. Figures 16 and 17 show that the model is capable of providing adequate control of the redundant platform. The system response is very fast, owing to the design of the robotic head platform. The TGst time takes into account the problem of eye-head coordination and the very high accelerations. The head movement is voluntarily delayed by less than 30 milliseconds after the eye movement, in accordance with human physiology, by means of the Ph block (Goossens and Van Opstal, 1997).

    9.2. Neurophysiological point of view
    A typical robotic eye-head movement is shown in Fig. 14.
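The eye-head coordination described above (eye driven immediately, head onset delayed by roughly 30 ms, eye counter-rotating once gaze approaches the target) can be illustrated with a minimal discrete-time simulation. This is a hedged sketch, not the authors' model: the first-order gains `k_eye` and `k_head` and the controller structure are invented for illustration only.

```python
# Illustrative sketch of eye-head gaze-shift coordination (NOT the paper's
# actual neurophysiological model). The head command is delayed ~30 ms
# relative to the eye, mimicking the role attributed to the Ph block.
# All gains and time constants below are hypothetical.

def simulate_gaze_shift(target_deg=30.0, head_delay_s=0.03,
                        dt=0.001, duration_s=0.4):
    eye, head = 0.0, 0.0
    k_eye, k_head = 40.0, 10.0      # hypothetical first-order gains (1/s)
    trajectory = []                 # (t, eye, head, gaze) samples
    for step in range(int(duration_s / dt)):
        t = step * dt
        gaze = eye + head
        gaze_error = target_deg - gaze
        # The eye is driven by gaze error from t = 0; once the head catches
        # up, the negative gaze error makes the eye counter-rotate.
        eye += k_eye * gaze_error * dt
        # The head starts moving only after the voluntary delay.
        if t >= head_delay_s:
            head += k_head * (target_deg - head) * dt
        trajectory.append((t, eye, head, eye + head))
    return trajectory

traj = simulate_gaze_shift()
final_t, final_eye, final_head, final_gaze = traj[-1]
```

Running the sketch shows the qualitative pattern reported in the abstract: gaze reaches the target quickly, after which the head keeps rotating while the eye counter-rotates back toward center.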

    Implementation of a neurophysiological model of saccadic eye movements on an anthropomorphic robotic head

    In this paper we investigated the relevance of a robotic implementation in the development and validation of a neurophysiological model of the generation of saccadic eye movements. To this aim, a well-characterized model of the brainstem saccadic circuitry was implemented on a humanoid robot head with 7 degrees of freedom (DOFs), which was designed to mimic the human head in terms of physical dimensions (i.e. geometry and masses), kinematics (i.e. number of DOFs and ranges of motion), dynamics (i.e. velocities and accelerations), and functionality (i.e. the ocular movements of vergence, smooth pursuit and saccades). Our implementation makes the robot head execute saccadic eye movements upon a visual stimulus appearing in the periphery of the robot's visual field, by reproducing the following steps: projection of the camera images onto collicular images, according to the modeled mapping between the retina and the superior colliculus (SC); transformation of the retinotopic coordinates of the stimulus obtained in the camera reference frame into their corresponding projections on the SC; spatio-temporal transformation of these coordinates according to what is known to happen in the brainstem saccade burst generator of primates; and execution of the eye movement by controlling one eye motor of the robot, in velocity. The capabilities of the robot head to execute saccadic movements have been tested with respect to the neurophysiological model implemented, in view of the use of this robotic implementation for validating and tuning the model itself in further focused experimental trials.
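The retina-to-SC mapping step in the pipeline above can be sketched with the standard log-polar approximation of the primate superior colliculus (Ottes et al., 1986). Note this is an assumption: the abstract does not state which mapping the authors implemented, and the parameter values below are the commonly cited ones, not necessarily theirs.

```python
import math

# Hedged sketch of a retinotopic-to-collicular mapping, using the widely
# cited log-polar SC model (Ottes et al., 1986). Parameters are the
# standard literature values, assumed for illustration.
A  = 3.0   # deg, foveal magnification constant
BU = 1.4   # mm, scale of the rostro-caudal (amplitude) axis
BV = 1.8   # mm/rad, scale of the medio-lateral (direction) axis

def retina_to_sc(R_deg, phi_deg):
    """Map a target at retinal eccentricity R (deg) and direction phi (deg)
    to anatomical SC coordinates (u, v) in mm."""
    phi = math.radians(phi_deg)
    u = BU * math.log(
        math.sqrt(R_deg**2 + 2 * A * R_deg * math.cos(phi) + A**2) / A)
    v = BV * math.atan2(R_deg * math.sin(phi), R_deg * math.cos(phi) + A)
    return u, v
```

The logarithm gives the strong foveal magnification of the SC map: equal steps along the u axis correspond to exponentially growing saccade amplitudes.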

    A vestibular interface for natural control of steering locomotion of robotic artifacts: preliminary experiments with a robotic endoscope, Springer Tracts in Advanced Robotics

    Abstract. This work addresses the problem of developing novel interfaces for robotic systems that allow the most natural transmission of control commands and sensory information, in both directions. A novel approach to the development of natural interfaces is based on the detection of the human's motion intention, instead of the movement itself as in traditional interfaces. Based on recent findings in neuroscience, the intention can be detected from anticipatory movements that naturally accompany more complex motor behaviors. This work aims to validate the hypothesis that head movements can be used to detect, slightly in advance, a person's intention to steer during locomotion, and that a natural interface for controlling the navigation of a robotic artifact can be developed based on this principle. A prototype 'vestibular' interface has been developed for this purpose, based on a 3-axial artificial vestibular system developed by some of the authors for humanoid robotics applications. Three different experimental sessions have been carried out using: (1) a driving video game; (2) a robotic endoscope with a 2-DOF steering tip; and (3) a mobile robot with an on-board camera. The experiments showed that anticipatory head movements occur even when the person is driving a device like those used in the experiments, and that such head movements always anticipate commands to the input device. The results indicate that the proposed hypothesis is valid and that further research is worthwhile in the direction of using this novel principle to develop natural interfaces, which can in fact be very useful in many tasks, with different devices.
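The anticipation principle described in this abstract (a head movement precedes the hand's steering command) can be illustrated with a minimal detector that flags a steering intention as soon as head yaw rate crosses a threshold. This is a hypothetical sketch, not the authors' interface; the threshold value and the function name are invented for illustration.

```python
# Illustrative sketch (hypothetical, not the authors' vestibular interface):
# flag a steering intention from the first head yaw-rate sample exceeding a
# threshold, before any command appears on the input device.

def detect_steering_intention(yaw_rate_series, threshold=0.3):
    """Scan a series of head yaw-rate samples (rad/s, positive = leftward).
    Return (sample_index, 'left' or 'right') for the first sample whose
    magnitude exceeds the threshold, or None if no intention is detected."""
    for i, w in enumerate(yaw_rate_series):
        if abs(w) > threshold:
            return i, ('left' if w > 0 else 'right')
    return None

# Example: the head starts turning left at sample 3; in the interface, the
# steering command to the robot would be issued from this sample onward.
samples = [0.0, 0.05, 0.1, 0.45, 0.6, 0.5]
detection = detect_steering_intention(samples)
```

In a real interface the raw gyroscope signal would of course be filtered and debounced before thresholding; the sketch only shows the detection principle.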